
    Loss Scaling and Step Size in Deep Learning Optimization

    Deep learning training consumes ever-increasing time and resources due to model complexity, the number of updates required to reach good results, and both the amount and dimensionality of the data. In this dissertation, we make training more efficient by focusing on the step size, reducing the number of computations per parameter in each update. We achieve this objective in two new ways: we use loss scaling as a proxy for the learning rate, and we use learnable layer-wise optimizers. Although our work is perhaps not the first to point to the equivalence of loss scaling and the learning rate in deep learning optimization, it is the first to leverage this relationship for more efficient training. We apply it not only to simple gradient descent but also extend it to other adaptive algorithms. Finally, we use meta-learning to shed light on relevant aspects, including learnable losses and optimizers. In this regard, we develop a novel learnable optimizer and use it effectively to acquire an adaptive rescaling factor and learning rate, resulting in a significant reduction in the memory required during training.
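
    To illustrate the loss-scaling/learning-rate equivalence the abstract refers to, here is a minimal sketch for plain gradient descent with step size \(\eta\) and a constant scale factor \(c > 0\) (the notation is ours, not the dissertation's):

    \[
    \theta_{t+1} = \theta_t - \eta \,\nabla_\theta \bigl(c \, L(\theta_t)\bigr)
               = \theta_t - (\eta c)\,\nabla_\theta L(\theta_t),
    \]

    i.e. scaling the loss by \(c\) is identical to an unscaled update with effective learning rate \(\eta c\). For adaptive methods such as Adam, which normalize the gradient by an estimate of its magnitude, the factor largely cancels and the equivalence is less direct, which is presumably why extending the idea beyond simple gradient descent requires the additional treatment described above.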